
    A Self-learning Algebraic Multigrid Method for Extremal Singular Triplets and Eigenpairs

    A self-learning algebraic multigrid method for dominant and minimal singular triplets and eigenpairs is described. The method consists of two multilevel phases. In the first, multiplicative phase (setup phase), tentative singular triplets are calculated along with a multigrid hierarchy of interpolation operators that approximately fit the tentative singular vectors in a collective and self-learning manner, using multiplicative update formulas. In the second, additive phase (solve phase), the tentative singular triplets are improved up to the desired accuracy by using an additive correction scheme with fixed interpolation operators, combined with a Ritz update. A suitable generalization of the singular value decomposition is formulated that applies to the coarse levels of the multilevel cycles. The proposed algorithm combines and extends two existing multigrid approaches for symmetric positive definite eigenvalue problems to the case of dominant and minimal singular triplets. Numerical tests on model problems from different areas show that the algorithm converges to high accuracy in a modest number of iterations and is flexible enough to deal with a variety of problems due to its self-learning properties. (Comment: 29 pages)
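    The Ritz update mentioned in the solve phase can be sketched with a standard Rayleigh-Ritz projection adapted to singular triplets. The sketch below is our illustration of that building block, not the paper's multilevel algorithm; the function name and toy setup are assumptions. It projects the matrix onto current left and right subspaces, computes the small SVD, and lifts the result back.

```python
import numpy as np

def ritz_update_svd(A, U, V):
    """Rayleigh-Ritz style update for approximate singular triplets (sketch).

    A: (m, n) matrix; U: (m, k) and V: (n, k) hold approximate left/right
    singular vectors in their columns. Returns updated triplets.
    """
    # Orthonormalize the current subspace bases.
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    # Project A onto the subspaces: a small k-by-k SVD problem.
    H = Qu.T @ A @ Qv
    W, sigma, Zt = np.linalg.svd(H)
    # Lift the small singular vectors back to the full space.
    return sigma, Qu @ W, Qv @ Zt.T

# Toy usage: one block power step seeds the subspaces, then a Ritz update.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
V0 = rng.standard_normal((30, 2))
U0 = A @ V0                      # rough left subspace from one power step
sigma, U1, V1 = ritz_update_svd(A, U0, V0)
print(sigma)                     # Ritz values; they improve as the subspaces improve
```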

    A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition

    A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear generalized minimal residual (GMRES) approach, recombining previous iterates in an optimal way, and essentially using the stand-alone one-step process as a preconditioner. In particular, the nonlinear extension of GMRES proposed by Washio and Oosterlee in [ETNA Vol. 15 (2003), pp. 165-185] for nonlinear partial differential equation problems is used. In the third step, a line search is performed for globalization. The resulting nonlinear GMRES (N-GMRES) optimization algorithm is applied to dense and sparse tensor decomposition test problems. The numerical tests show that ALS accelerated by N-GMRES may significantly outperform both stand-alone ALS and a standard nonlinear conjugate gradient optimization method, especially when highly accurate stationary points are desired for difficult problems. The proposed N-GMRES optimization algorithm is based on general concepts and may be applied to other nonlinear optimization problems.
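    The recombination in step two admits a short generic sketch. The code below is our illustration of a windowed nonlinear GMRES acceleration step, not the paper's full algorithm: the residual function (the gradient, in the optimization setting), the window size, and the plain gradient-descent stand-in for the ALS preconditioner are assumptions, and the line search of step three is omitted.

```python
import numpy as np

def ngmres_accelerate(x_bar, g_bar, X_prev, G_prev):
    """Windowed nonlinear GMRES acceleration step (sketch).

    x_bar: tentative iterate from the one-step process; g_bar: residual there.
    X_prev, G_prev: windows of previous iterates and their residuals.
    """
    # Columns approximate Jacobian actions: J (x_bar - x_i) ~ g_bar - g_i.
    D = np.column_stack([g_bar - gi for gi in G_prev])
    # Coefficients minimizing the linearized residual norm
    # || g_bar + sum_i alpha_i (g_bar - g_i) ||_2.
    alpha, *_ = np.linalg.lstsq(D, -g_bar, rcond=None)
    # Recombine previous iterates: x_acc = x_bar + sum_i alpha_i (x_bar - x_i).
    steps = np.column_stack([x_bar - xi for xi in X_prev])
    return x_bar + steps @ alpha

# Toy usage: accelerate plain gradient-descent steps on a convex quadratic.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
M = M.T @ M + np.eye(20)                      # SPD Hessian
b = rng.standard_normal(20)
grad = lambda x: M @ x - b                    # residual = gradient

x = np.zeros(20)
X_prev, G_prev = [], []
for _ in range(30):
    x_bar = x - 0.01 * grad(x)                # step one: one-step process
    g_bar = grad(x_bar)
    x = ngmres_accelerate(x_bar, g_bar, X_prev, G_prev) if X_prev else x_bar
    X_prev = (X_prev + [x_bar])[-3:]          # keep a small window
    G_prev = (G_prev + [g_bar])[-3:]
print(np.linalg.norm(grad(x)))                # gradient norm shrinks toward zero
```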

    The influence of societal individualism on a century of tobacco use: modelling the prevalence of smoking

    Tobacco smoking is predicted to cause approximately six million deaths worldwide in 2014. Responding effectively to this epidemic requires a thorough understanding of how smoking behaviour is transmitted and modified. Here, we present a new mathematical model of the social dynamics that cause cigarette smoking to spread in a population. Our model predicts that more individualistic societies will show faster adoption and cessation of smoking. Evidence from a new century-long composite data set on smoking prevalence in 25 countries supports the model, with direct implications for public health interventions around the world. Our results suggest that differences in culture between societies can measurably affect the temporal dynamics of a social spreading process, and that these effects can be understood via a quantitative mathematical model matched to observations.

    Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis

    We study the asymptotic convergence of AA($m$), i.e., Anderson acceleration with window size $m$ for accelerating fixed-point methods $x_{k+1}=q(x_k)$, $x_k \in R^n$. Convergence acceleration by AA($m$) has been widely observed but is not well understood. We consider the case where the fixed-point iteration function $q(x)$ is differentiable and the convergence of the fixed-point method itself is root-linear. We identify numerically several conspicuous properties of AA($m$) convergence: First, AA($m$) sequences $\{x_k\}$ converge root-linearly, but the root-linear convergence factor depends strongly on the initial condition. Second, the AA($m$) acceleration coefficients $\beta^{(k)}$ do not converge but oscillate as $\{x_k\}$ converges to $x^*$. To shed light on these observations, we write the AA($m$) iteration as an augmented fixed-point iteration $z_{k+1}=\Psi(z_k)$, $z_k \in R^{n(m+1)}$, and analyze the continuity and differentiability properties of $\Psi(z)$ and $\beta(z)$. We find that the vector of acceleration coefficients $\beta(z)$ is not continuous at the fixed point $z^*$. However, we show that, despite the discontinuity of $\beta(z)$, the iteration function $\Psi(z)$ is Lipschitz continuous and directionally differentiable at $z^*$ for AA(1), and we generalize this to AA($m$) with $m>1$ for most cases. Furthermore, we find that $\Psi(z)$ is not differentiable at $z^*$. We then discuss how these theoretical findings relate to the observed convergence behaviour of AA($m$). The discontinuity of $\beta(z)$ at $z^*$ allows $\beta^{(k)}$ to oscillate as $\{x_k\}$ converges to $x^*$, and the non-differentiability of $\Psi(z)$ allows AA($m$) sequences to converge with root-linear convergence factors that strongly depend on the initial condition. Additional numerical results illustrate our findings.
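    As a concrete reference point, a minimal AA($m$) implementation in the common difference form (following the Walker and Ni formulation, with mixing parameter 1) looks as follows; the fixed-point map and window size in the toy usage are illustrative.

```python
import numpy as np

def aa(q, x0, m=2, tol=1e-10, max_iter=100):
    """AA(m): Anderson acceleration of the fixed-point iteration x_{k+1} = q(x_k)."""
    xs = [np.asarray(x0, dtype=float)]
    fs = [q(xs[0]) - xs[0]]                        # residuals f_k = q(x_k) - x_k
    for k in range(max_iter):
        if np.linalg.norm(fs[-1]) < tol:
            break
        mk = min(m, k)
        if mk == 0:
            x_next = xs[-1] + fs[-1]               # plain fixed-point step
        else:
            # Difference matrices over the window of the last mk steps.
            dX = np.column_stack([xs[-i] - xs[-i - 1] for i in range(1, mk + 1)])
            dF = np.column_stack([fs[-i] - fs[-i - 1] for i in range(1, mk + 1)])
            # Least-squares coefficients; equivalent to the beta coefficients
            # with the constraint sum(beta) = 1 eliminated.
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            x_next = xs[-1] + fs[-1] - (dX + dF) @ gamma
        xs.append(x_next)
        fs.append(q(x_next) - x_next)
    return xs[-1]

# Toy usage: the scalar fixed-point problem x = cos(x).
print(aa(np.cos, np.array([1.0])))                 # ~0.7390851 (the Dottie number)
```

    Logging the residual norms across iterations for different starting points is one way to observe the initial-condition-dependent root-linear convergence factors and oscillating coefficients described above.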